2,266 research outputs found

    A Monomial-Oriented GVW for Computing Gröbner Bases

    The GVW algorithm, presented by Gao et al., is a signature-based algorithm for computing Gröbner bases. In this paper, a variant of GVW is presented. This new algorithm is called the monomial-oriented GVW algorithm, or mo-GVW algorithm for short. The mo-GVW algorithm provides a new framework for GVW and regards {\em labeled monomials} instead of {\em labeled polynomials} as the basic elements of the algorithm. Unlike the original GVW algorithm, for each labeled monomial the mo-GVW algorithm attempts to find the smallest signature that can generate that monomial. The mo-GVW algorithm also avoids generating J-pairs and uses efficient methods for searching for reducers and checking criteria. Thus, the mo-GVW algorithm performs better in practical implementations.
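
    The abstract above describes the labeled-monomial idea only in prose. Purely as an illustration, the toy Python sketch below keeps, for each monomial, the smallest signature seen so far under a simple position-then-monomial order; the names (LabeledMonomial, update_signature), the tuple encodings, and the toy order are assumptions, not taken from the GVW or mo-GVW papers, and a real implementation would also need reduction, criteria checks, and a proper signature order.

    from dataclasses import dataclass
    from typing import Dict, Tuple

    Monomial = Tuple[int, ...]        # exponent vector, e.g. x^2*y -> (2, 1)
    Signature = Tuple[int, Monomial]  # (module index, monomial); compared lexicographically

    @dataclass
    class LabeledMonomial:
        monomial: Monomial
        signature: Signature          # smallest signature known so far for this monomial

    def update_signature(table: Dict[Monomial, LabeledMonomial],
                         monomial: Monomial,
                         candidate: Signature) -> None:
        # Record the candidate only if it is smaller than the signature stored so far.
        entry = table.get(monomial)
        if entry is None or candidate < entry.signature:
            table[monomial] = LabeledMonomial(monomial, candidate)

    if __name__ == "__main__":
        table: Dict[Monomial, LabeledMonomial] = {}
        update_signature(table, (1, 1), (2, (0, 0)))  # x*y reached with signature e_2
        update_signature(table, (1, 1), (1, (0, 1)))  # x*y reached again with signature y*e_1
        print(table[(1, 1)].signature)                # (1, (0, 1)): the smaller signature is kept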

    Joint Parametric Fault Diagnosis and State Estimation Using KF-ML Method

    Study of Nonlinear Parameter Identification Using UKF and Maximum Likelihood Method

    Time-Varying FOPDT Modeling and On-line Parameter Identification

    Understanding Generalization of Federated Learning via Stability: Heterogeneity Matters

    Generalization performance is a key metric for evaluating machine learning models in real-world applications. Good generalization indicates that a model can predict unseen data correctly even when trained on a limited amount of data. Federated learning (FL), which has emerged as a popular distributed learning framework, allows multiple devices or clients to train a shared model without violating privacy requirements. While the existing literature has extensively studied the generalization performance of centralized machine learning algorithms, similar analyses in the federated setting are either absent or rely on very restrictive assumptions on the loss functions. In this paper, we aim to analyze the generalization performance of federated learning by means of algorithmic stability, which measures the change in the output model of an algorithm when one data point is perturbed. Three widely used algorithms, FedAvg, SCAFFOLD, and FedProx, are studied under convex and non-convex loss functions. Our analysis shows that the generalization performance of models trained by these three algorithms is closely related to the heterogeneity of the clients' datasets as well as the convergence behavior of the algorithms. In particular, in the i.i.d. setting, our results recover the classical results for stochastic gradient descent (SGD). Comment: Submitted to NeurIPS 202
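
    As a concrete point of reference for the FedAvg setting discussed above, the following minimal NumPy sketch performs FedAvg-style aggregation on a least-squares problem with heterogeneous clients. It is an illustration only, not the paper's code: the client data, learning rate, and number of local steps are assumed, and the local update uses full-batch gradient steps rather than SGD for simplicity.

    import numpy as np

    def local_update(w, data, lr=0.1, steps=5):
        # A few full-batch gradient steps on one client's least-squares loss
        # 0.5 * ||X w - y||^2 / n, whose gradient is X^T (X w - y) / n.
        X, y = data
        for _ in range(steps):
            w = w - lr * X.T @ (X @ w - y) / len(y)
        return w

    def fedavg(clients, rounds=20, dim=3):
        # Each round: broadcast w, run local updates, average weighted by client size.
        w = np.zeros(dim)
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        for _ in range(rounds):
            local = np.stack([local_update(w.copy(), c) for c in clients])
            w = (sizes[:, None] * local).sum(axis=0) / sizes.sum()
        return w

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        true_w = np.array([1.0, -2.0, 0.5])
        clients = []
        # Heterogeneous clients: different sample sizes and feature scales.
        for n, scale in [(50, 1.0), (20, 2.0), (80, 0.5)]:
            X = rng.normal(scale=scale, size=(n, 3))
            y = X @ true_w + 0.01 * rng.normal(size=n)
            clients.append((X, y))
        print(fedavg(clients))  # moves toward true_w despite the heterogeneity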